
    A Corporate Ethical Compass Tool to Measure Public Relations Decision Making

    Corporate Public Relations (PR) ethics is a perennial research topic in both theory and practice, as PR practitioners often face numerous ethical choices when addressing controversial issues. However, how to quantify or measure the ethical alternatives in PR decision making, and thereby help practitioners make more ethically sound decisions, remains an open problem. To address this problem, this thesis proposes the PR Ethical Compass, a measuring tool to help corporations justify and evaluate the consequences of their ethical public relations decisions. To investigate the advantages and disadvantages of the Ethical Compass technique, qualitative research was first used to identify ten corporate public relations cases; quantitative research was then used to collect public opinion and corporate financial data and to test each case against the EthCom Rating calculated by the tool. The Ethical Compass tool is directed at PR practitioners and all those interested in public communication who seek an effective mechanism for measuring the projected outcome of corporate ethical decision making.
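    A minimal sketch of what an EthCom-style composite rating could look like is given below. The thesis abstract does not specify the actual formula, so the inputs (public_approval, financial_impact) and the weights are purely illustrative assumptions about how public-opinion and financial signals might be combined into a single 0-100 score.

    from dataclasses import dataclass

    @dataclass
    class PRCase:
        name: str
        public_approval: float   # assumed survey approval, in [0, 1]
        financial_impact: float  # assumed normalized financial change, in [-1, 1]

    def ethcom_rating(case: PRCase, w_opinion: float = 0.6, w_finance: float = 0.4) -> float:
        """Hypothetical weighted blend of opinion and financial signals, scaled to 0-100."""
        finance_score = (case.financial_impact + 1.0) / 2.0  # map [-1, 1] onto [0, 1]
        return 100.0 * (w_opinion * case.public_approval + w_finance * finance_score)

    # Example: a case with strong public approval but a small financial dip.
    print(ethcom_rating(PRCase("product-recall response", public_approval=0.72, financial_impact=-0.1)))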

    Motivational Mindsets about Change: Integrating Lay Theories of Personal and Situational Malleability.

    The study of lay theories focuses on understanding people’s fundamental beliefs, the interpretations of the world that they shape, and their regulatory consequences. Central to this scientific endeavor is the subject of stability and changeability, a cornerstone concept of human motivation (Weiner, 1985). Theories of attribute stability motivate self-validation through performance and dispositional judgments of others, whereas theories of attribute malleability facilitate change-directed efforts and expectations of improvement (Kammrath & Peetz, 2012; Molden & Dweck, 2006). Thus far, research has primarily focused on people’s beliefs about their personal attributes (“self theories”); comparatively less work has elucidated the implications of people’s beliefs about the external world (“situation theories”). The goal of this dissertation is to expand our understanding of how self theories and situation theories work and to introduce a new theoretical framework that integrates them. In Chapter 1, I introduce the lay theories of change literature and provide a general overview of the following chapters. In Chapter 2, I test an important boundary condition of previous self theory research: choice context. Four studies show that offering people the choice between persisting or quitting on an intellectual task replicates conventional lay theory differences in persistence, but these differences are eliminated when people’s choices are expanded to include switching problems. In Chapter 3, I examine the effects of people’s situation theories on behavior. Four studies show that construing situations as malleable rather than fixed galvanizes action to change unfavorable circumstances. In Chapter 4, I assess the implications of lay theories about how people should interact with their environments to achieve their goals. When it comes to achieving passion for work, some people believe that they should find work compatible with their interests, whereas others believe that passion comes through cultivating competence. These two mindsets lead to different affective forecasts and choices, but both are similarly effective at attaining passion. Assimilating these and past findings in Chapter 5, I propose the “Self by Situation Change” (SSC) model as a heuristic framework that integrates self and situation theories. Finally, I wrap up the dissertation with future directions and concluding thoughts in Chapter 6.
    PhD, Psychology, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113483/1/patchen_1.pd

    Adv3D: Generating 3D Adversarial Examples in Driving Scenarios with NeRF

    Deep neural networks (DNNs) have been proven extremely susceptible to adversarial examples, which raises special safety-critical concerns for DNN-based autonomous driving stacks (e.g., 3D object detection). Although there is extensive work on image-level attacks, most of it is restricted to 2D pixel spaces, and such attacks are not always physically realistic in our 3D world. Here we present Adv3D, the first exploration of modeling adversarial examples as Neural Radiance Fields (NeRFs). Advances in NeRF provide photorealistic appearance and accurate 3D generation, yielding more realistic and realizable adversarial examples. We train our adversarial NeRF by minimizing the confidence that 3D detectors predict for surrounding objects on the training set. We then evaluate Adv3D on the unseen validation set and show that it causes a large performance drop when the NeRF is rendered in any sampled pose. To generate physically realizable adversarial examples, we propose primitive-aware sampling and semantic-guided regularization, which enable 3D patch attacks with camouflage adversarial texture. Experimental results demonstrate that the trained adversarial NeRF generalizes well to different poses, scenes, and 3D detectors. Finally, we provide a defense against our attacks based on adversarial training through data augmentation. Project page: https://len-li.github.io/adv3d-we
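    The core optimization described above (training a NeRF so that a frozen 3D detector's confidence on surrounding objects drops, under randomly sampled poses) can be sketched as follows. This is a hedged illustration, not the Adv3D code: TinyAdvNeRF and FrozenDetector are hypothetical stand-ins for the paper's NeRF renderer and 3D detector.

    import torch
    import torch.nn as nn

    class TinyAdvNeRF(nn.Module):
        """Hypothetical stand-in for an adversarial NeRF: maps a camera pose to a rendered patch."""
        def __init__(self, patch_hw=(32, 32)):
            super().__init__()
            h, w = patch_hw
            # Learnable "texture"; a real NeRF would produce this via volume rendering.
            self.texture = nn.Parameter(torch.rand(3, h, w))

        def render(self, pose: torch.Tensor) -> torch.Tensor:
            # A real implementation renders the radiance field from `pose`;
            # here we only modulate the texture so the sketch stays differentiable.
            scale = torch.sigmoid(pose.sum())
            return torch.clamp(self.texture * scale, 0.0, 1.0)

    class FrozenDetector(nn.Module):
        """Hypothetical frozen 3D detector head returning per-object confidences."""
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 8))

        def forward(self, rendered_patch: torch.Tensor) -> torch.Tensor:
            logits = self.backbone(rendered_patch.unsqueeze(0))
            return torch.sigmoid(logits)  # confidences for surrounding objects

    adv_nerf = TinyAdvNeRF()
    detector = FrozenDetector()
    for p in detector.parameters():
        p.requires_grad_(False)  # the detector stays fixed; only the NeRF is optimized

    opt = torch.optim.Adam(adv_nerf.parameters(), lr=1e-2)
    for step in range(100):
        pose = torch.randn(6)          # sample a random camera pose each step
        patch = adv_nerf.render(pose)  # render the adversarial object
        conf = detector(patch)         # detector confidence on surrounding objects
        loss = conf.mean()             # minimizing confidence suppresses detections
        opt.zero_grad()
        loss.backward()
        opt.step()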

    MEDL-U: Uncertainty-aware 3D Automatic Annotation based on Evidential Deep Learning

    Advancements in deep learning-based 3D object detection necessitate the availability of large-scale datasets. However, this requirement introduces the challenge of manual annotation, which is often both burdensome and time-consuming. To tackle this issue, the literature has seen the emergence of several weakly supervised frameworks for 3D object detection that can automatically generate pseudo labels for unlabeled data. Nevertheless, these generated pseudo labels contain noise and are not as accurate as those labeled by humans. In this paper, we present the first approach that addresses the inherent ambiguities present in pseudo labels by introducing an Evidential Deep Learning (EDL) based uncertainty estimation framework. Specifically, we propose MEDL-U, an EDL framework based on MTrans, which not only generates pseudo labels but also quantifies the associated uncertainties. However, applying EDL to 3D object detection presents three primary challenges: (1) relatively lower pseudo-label quality in comparison to other autolabelers; (2) excessively high evidential uncertainty estimates; and (3) a lack of clear interpretability and effective utilization of uncertainties for downstream tasks. We tackle these issues by introducing an uncertainty-aware IoU-based loss and an evidence-aware multi-task loss function, and by implementing a post-processing stage for uncertainty refinement. Our experimental results demonstrate that probabilistic detectors trained using the outputs of MEDL-U surpass deterministic detectors trained using outputs from previous 3D annotators on the KITTI val set for all difficulty levels. Moreover, MEDL-U achieves state-of-the-art results on the KITTI official test set compared to existing 3D automatic annotators.
    Comment: 6 pages Main, 1 page Reference, 5 pages Appendix
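    As a rough illustration of how evidential deep learning attaches uncertainty to regression outputs such as pseudo-label box parameters, the sketch below uses the Normal-Inverse-Gamma formulation of deep evidential regression (Amini et al., 2020). It is an assumption-laden stand-in, not the MEDL-U/MTrans implementation, and it omits the paper's uncertainty-aware IoU loss and post-processing refinement.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EvidentialRegressionHead(nn.Module):
        """Predicts (gamma, nu, alpha, beta) of a Normal-Inverse-Gamma distribution per target."""
        def __init__(self, in_dim: int, out_dim: int):
            super().__init__()
            self.fc = nn.Linear(in_dim, 4 * out_dim)

        def forward(self, x: torch.Tensor):
            gamma, log_nu, log_alpha, log_beta = self.fc(x).chunk(4, dim=-1)
            nu = F.softplus(log_nu)
            alpha = F.softplus(log_alpha) + 1.0   # alpha > 1 keeps the predicted variance finite
            beta = F.softplus(log_beta)
            return gamma, nu, alpha, beta

    def evidential_nll(y, gamma, nu, alpha, beta):
        """Negative log-likelihood of y under the Normal-Inverse-Gamma evidential distribution."""
        omega = 2.0 * beta * (1.0 + nu)
        nll = (0.5 * torch.log(torch.pi / nu)
               - alpha * torch.log(omega)
               + (alpha + 0.5) * torch.log((y - gamma) ** 2 * nu + omega)
               + torch.lgamma(alpha) - torch.lgamma(alpha + 0.5))
        return nll.mean()

    # Epistemic uncertainty beta / (nu * (alpha - 1)) can flag noisy pseudo labels downstream.
    head = EvidentialRegressionHead(in_dim=256, out_dim=7)  # 7 = 3D box (x, y, z, w, l, h, yaw)
    feats = torch.randn(4, 256)                             # dummy per-object features
    target = torch.randn(4, 7)                              # pseudo-label box parameters
    gamma, nu, alpha, beta = head(feats)
    loss = evidential_nll(target, gamma, nu, alpha, beta)
    loss.backward()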